To address the multi-label classification problem, in which a sample may belong to multiple classes simultaneously, a new multi-label classification algorithm based on a combination of floating-threshold classifiers was proposed. Firstly, the theory and error estimation of the AdaBoost algorithm with floating threshold (AdaBoost.FT) were analyzed and discussed, and it was proved that AdaBoost.FT overcomes the instability that a fixed-threshold classifier exhibits when classifying points near the decision boundary, thereby improving the accuracy of single-label classification. Then, the Binary Relevance (BR) method was introduced to apply AdaBoost.FT to the multi-label classification problem, yielding the multi-label classification algorithm based on floating-threshold classifier combination, namely multi-label AdaBoost.FT. The experimental results show that, in average precision, multi-label AdaBoost.FT outperforms three other multi-label algorithms, AdaBoost.MH (multi-class, multi-label version of AdaBoost based on Hamming loss), ML-kNN (Multi-Label k-Nearest Neighbor) and RankSVM (Ranking Support Vector Machine), by about 4%, 8% and 11% respectively on the Emotions dataset, and is only slightly worse than RankSVM, by about 3% and 1% respectively, on the Scene and Yeast datasets. The experimental analyses show that multi-label AdaBoost.FT obtains better classification results on datasets that have a small number of labels or whose labels are mutually irrelevant.
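The Binary Relevance decomposition with a per-label floating threshold can be sketched as follows. This is only an illustrative stand-in: the base scorer here is a simple class-mean (prototype) classifier rather than the paper's AdaBoost.FT, and the class name `BinaryRelevanceFT` and its thresholding rule (midpoint of the two class-mean scores) are assumptions, not the paper's definition.

```python
import numpy as np

class BinaryRelevanceFT:
    """Binary Relevance: train one real-valued scorer per label, each with
    its own 'floating' decision threshold estimated from the training
    scores instead of a fixed cut at 0.  The scorer is a simple
    class-mean classifier standing in for AdaBoost.FT (illustrative only)."""

    def fit(self, X, Y):
        # Y: (n_samples, n_labels) binary indicator matrix
        self.w_, self.b_ = [], []
        for j in range(Y.shape[1]):
            y = Y[:, j]
            mu_pos = X[y == 1].mean(axis=0)
            mu_neg = X[y == 0].mean(axis=0)
            w = mu_pos - mu_neg                  # direction separating the class means
            scores = X @ w
            # floating threshold: midpoint of the two mean class scores,
            # rather than a fixed threshold shared by all labels
            b = 0.5 * (scores[y == 1].mean() + scores[y == 0].mean())
            self.w_.append(w)
            self.b_.append(b)
        return self

    def predict(self, X):
        cols = [(X @ w >= b).astype(int) for w, b in zip(self.w_, self.b_)]
        return np.stack(cols, axis=1)

# toy usage: label 0 = (first feature > 2), label 1 = (second feature > 2)
X = np.array([[0., 0.], [1., 0.], [5., 0.], [6., 0.],
              [0., 5.], [0., 6.], [5., 5.], [6., 6.]])
Y = np.array([[0, 0], [0, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [1, 1], [1, 1]])
model = BinaryRelevanceFT().fit(X, Y)
```

The point of the sketch is the decomposition: each label gets an independent binary classifier, and each classifier carries its own data-driven threshold instead of a global fixed one.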
In multi-label classification, the case where the two labels of an instance come from two separate label sets is called the labels matching problem; however, no algorithm specific to this problem exists. Although the labels matching problem can be solved by traditional multi-label classification algorithms, it has its own particularities. After analyzing the labels matching problem, a new labels matching algorithm based on pairwise label sets was proposed using an adaptive method, which combined real Adaptive Boosting (real AdaBoost) with a global optimization idea. This algorithm can learn the labels matching rule well and complete the matching. The experimental results show that, compared with traditional algorithms, the new algorithm not only reduces the search scope of the label space, but also decreases the minimum learning error as the number of weak classifiers increases, making classification both more accurate and faster.
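The boosting core that the matching algorithm builds on can be sketched as a minimal real AdaBoost with confidence-rated threshold stumps (in the Schapire-Singer form, where each stump side outputs 0.5*log(W+/W-)). The pairwise-labelset construction and the global optimization step are specific to the paper and are not reproduced here; the function names below are assumptions.

```python
import numpy as np

def real_adaboost_fit(X, y, n_rounds=5):
    """Minimal real AdaBoost with single-feature threshold stumps.
    y takes values in {-1, +1}.  Each stump outputs a real-valued
    confidence 0.5*log(W+/W-) per side; each round picks the stump
    minimizing the weight normalizer Z (illustrative sketch only)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    eps = 1e-10
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for t in np.unique(X[:, j]):
                side = X[:, j] <= t
                out = np.empty(2)
                for s, mask in enumerate((side, ~side)):
                    wp = w[mask & (y == 1)].sum()   # positive weight mass
                    wn = w[mask & (y == -1)].sum()  # negative weight mass
                    out[s] = 0.5 * np.log((wp + eps) / (wn + eps))
                f = np.where(side, out[0], out[1])
                z = np.sum(w * np.exp(-y * f))      # normalizer to minimize
                if best is None or z < best[0]:
                    best = (z, j, t, out.copy())
        _, j, t, out = best
        f = np.where(X[:, j] <= t, out[0], out[1])
        w = w * np.exp(-y * f)                      # reweight hard examples
        w /= w.sum()
        stumps.append((j, t, out))
    return stumps

def real_adaboost_predict(stumps, X):
    score = np.zeros(X.shape[0])
    for j, t, out in stumps:
        score += np.where(X[:, j] <= t, out[0], out[1])
    return np.where(score >= 0, 1, -1)
```

A candidate label pair could be encoded as a feature vector and labeled +1 (correct match) or -1 (wrong match); the boosted scorer then ranks candidate matches, which is consistent with the abstract's claim that the minimum learning error decreases as weak classifiers are added.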